Mind-Wandering

Model Diagnostics

CACHE> loading loos_task from cache/vars/tms_analyses_loos_task.RData

loos_task$diffs
mod_selection_res <- as.data.frame(loos_task$diffs) %>% select(c(elpd_diff, se_diff))
models_elpd_diff <- c(
  "BV x AE + probe + block",
  "BV x AE + probe + block + condition",
  "BV x AE + probe + block + condition + randomization",
  "BV x AE + probe",
  "BV x AE"
  )
mod_selection_res <- data.frame(models_elpd_diff, mod_selection_res$elpd_diff, mod_selection_res$se_diff)
names(mod_selection_res) <- c("Model", "elpd_diff", "se_diff")

# compute alignment before comma() coerces the numeric columns to character
alignment <- map_chr(mod_selection_res, ~ifelse(is.numeric(.x), "r", "l"))

mod_selection_res <- mod_selection_res %>% 
  mutate(across(where(is.numeric), ~comma(., accuracy = 0.01))) %>% 
  rename_all(~gsub("\\.", " ", .)) 
           elpd_diff se_diff
mod_task02   0.0       0.0  
mod_task03  -0.2       1.3  
mod_task04  -0.3       1.3  
mod_task01 -13.6       5.4  
mod_task00 -17.3       6.3  
mod_selection_res %>% 
  kbl(caption="LOO-CV: MW", align=alignment) %>% 
  kable_classic(full_width=TRUE, html_font="Times") %>% 
  row_spec(2, bold = TRUE, color = "red")
LOO-CV: MW
Model                                                 elpd_diff  se_diff
BV x AE + probe + block                                    0.00     0.00
BV x AE + probe + block + condition                       -0.24     1.25
BV x AE + probe + block + condition + randomization       -0.29     1.26
BV x AE + probe                                          -13.58     5.44
BV x AE                                                  -17.27     6.33
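As a rough heuristic (not a formal test), a model is only clearly worse when its elpd deficit is large relative to its standard error, e.g. |elpd_diff| > 2 × se_diff. A minimal base-R sketch applying this to the values in the table above:

```r
# Values copied from the LOO-CV: MW table above
models <- c("BV x AE + probe + block",
            "BV x AE + probe + block + condition",
            "BV x AE + probe + block + condition + randomization",
            "BV x AE + probe",
            "BV x AE")
elpd_diff <- c(0.00, -0.24, -0.29, -13.58, -17.27)
se_diff   <- c(0.00,  1.25,  1.26,   5.44,   6.33)

# TRUE when the deficit exceeds ~2 SE, i.e. the model is distinguishably worse
setNames(abs(elpd_diff) > 2 * se_diff, models)
```

Only the two models without block come out distinguishably worse; adding condition or randomization neither helps nor hurts predictive performance.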

Model Coefficients: substantial uncertainty around the stimulation effects (90% intervals)

CACHE> loading mod_task03 from cache/vars/tms_analyses_mod_task03.RData

Fig. 1. Model 7: On-task Score: coefficients for all parameters of interest.

Results: MW
Hypothesis          Estimate  Est Error  CI Lower  CI Upper  Evid Ratio  Post Prob
BV < 0                 -0.03       0.05     -0.13      0.06        3.37       0.77
AE > 0                  0.06       0.05     -0.03      0.15        9.58       0.91
Block < 0              -0.11       0.02     -0.15     -0.07         Inf       1.00
Probe number < 0       -0.04       0.01     -0.07     -0.02      713.29       1.00
BV x AE < 0            -0.04       0.05     -0.14      0.06        3.75       0.79
Active rhTMS > 0        0.28       0.26     -0.23      0.79        6.52       0.87
Sham rhTMS > 0         -0.23       0.26     -0.75      0.26        0.22       0.18
Active arrhTMS > 0      0.06       0.26     -0.44      0.58        1.43       0.59
Sham arrhTMS > 0       -0.32       0.26     -0.84      0.19        0.12       0.11
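For one-sided hypotheses like these (presumably tested via brms::hypothesis), the evidence ratio is simply the posterior odds of the stated direction: Evid Ratio = Post Prob / (1 − Post Prob). A quick sanity check against the rounded values in the table (small discrepancies come from Post Prob being rounded to two decimals):

```r
# Posterior probabilities copied from the Results: MW table (rounded to 2 d.p.)
post_prob <- c(BV = 0.77, AE = 0.91, Block = 1.00, `BV x AE` = 0.79)

# Evidence ratio = posterior odds; Inf when Post Prob rounds to 1
round(post_prob / (1 - post_prob), 2)
```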

On-task Score Across Conditions

Fig. 2. MW Mean ± 1SE across conditions.

Sanity check: A Closer Look at Task Performance

Fig. 3. Mean ± 1SE changes of AE and BV across conditions.

Fig. 4. AE & BV versus On-task score: Mean ± 1SE across conditions. Subjects are either poor at tapping along with the metronome or misunderstood this part of the task.

AE x BV Interaction Across Conditions

Fig. 5. AE & BV versus On-task score: BV x AE interaction versus on-/off-task states. Active rhTMS seems to flip the interaction: subjects perform worse when they think they are on task, and vice versa.

Model fitting: BV

Model Diagnostics

CACHE> loading loos_bv from cache/vars/tms_analyses_loos_bv.RData

loos_bv
Output of model 'mod_bv00':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo   -754.0 37.1
p_loo        38.8  1.1
looic      1508.1 74.2
------
Monte Carlo SE of elpd_loo is 0.1.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Output of model 'mod_bv01':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo   -751.8 37.0
p_loo        40.3  1.1
looic      1503.5 74.1
------
Monte Carlo SE of elpd_loo is 0.1.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Output of model 'mod_bv02':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo   -752.8 36.9
p_loo        41.7  1.1
looic      1505.6 73.9
------
Monte Carlo SE of elpd_loo is 0.1.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Output of model 'mod_bv03':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo   -749.5 37.5
p_loo        44.7  1.2
looic      1499.1 75.0
------
Monte Carlo SE of elpd_loo is 0.1.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Model comparisons:
         elpd_diff se_diff
mod_bv03  0.0       0.0   
mod_bv01 -2.2       4.4   
mod_bv02 -3.3       4.0   
mod_bv00 -4.5       5.2   
as.data.frame(loos_bv$diffs)
           elpd_diff  se_diff  elpd_loo se_elpd_loo    p_loo se_p_loo    looic se_looic
mod_bv03    0.000000 0.000000 -749.5354    37.47914 44.65426 1.245530 1499.071 74.95829
mod_bv01   -2.226843 4.354602 -751.7623    37.03687 40.26908 1.058170 1503.525 74.07375
mod_bv02   -3.282269 3.992372 -752.8177    36.94252 41.68978 1.117909 1505.635 73.88504
mod_bv00   -4.509851 5.198430 -754.0453    37.10229 38.84448 1.067001 1508.091 74.20458

loos_bv$diffs
         elpd_diff se_diff
mod_bv03       0.0     0.0
mod_bv01      -2.2     4.4
mod_bv02      -3.3     4.0
mod_bv00      -4.5     5.2

mod_selection_res <- as.data.frame(loos_bv$diffs) %>% select(c(elpd_diff, se_diff))
models_elpd_diff <- c(
  "probe + block + condition + visit",
  "probe + block",
  "probe + block + condition",
  "probe"
  )
mod_selection_res <- data.frame(models_elpd_diff, mod_selection_res$elpd_diff, mod_selection_res$se_diff)
names(mod_selection_res) <- c("Model", "elpd_diff", "se_diff")

# compute alignment before comma() coerces the numeric columns to character
alignment <- map_chr(mod_selection_res, ~ifelse(is.numeric(.x), "r", "l"))

mod_selection_res <- mod_selection_res %>% 
  mutate(across(where(is.numeric), ~comma(., accuracy = 0.01))) %>% 
  rename_all(~gsub("\\.", " ", .)) 
LOO-CV: BV
Model                              elpd_diff  se_diff
probe + block + condition + visit       0.00     0.00
probe + block                          -2.23     4.35
probe + block + condition              -3.28     3.99
probe                                  -4.51     5.20

Model Coefficients: substantial uncertainty around the stimulation effects (90% intervals)

CACHE> loading mod_bv03 from cache/vars/tms_analyses_mod_bv03.RData

Fig. 6. BV model (mod_bv03): coefficients for all parameters of interest.

Results: BV
Hypothesis          Estimate  Est Error  CI Lower  CI Upper  Evid Ratio  Post Prob
Block > 0              -0.02       0.01     -0.04      0.00        0.01       0.01
Probe number > 0        0.00       0.01     -0.01      0.01        1.17       0.54
Active rhTMS > 0        0.18       0.09      0.01      0.35       50.02       0.98
Sham rhTMS > 0          0.04       0.09     -0.12      0.22        2.36       0.70
Active arrhTMS > 0      0.13       0.09     -0.04      0.30       13.90       0.93
Sham arrhTMS > 0        0.04       0.08     -0.12      0.21        2.21       0.69

Model fitting: AE

Model Diagnostics

loos_apen=load.cache.var("loos_apen", base=bname)
loos_apen
CACHE> loading loos_apen from cache/vars/tms_analyses_loos_apen.RData
Output of model 'mod_apen00':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo  -1053.1 26.8
p_loo        31.2  1.8
looic      2106.1 53.5
------
Monte Carlo SE of elpd_loo is 0.1.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Output of model 'mod_apen01':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo  -1052.3 26.7
p_loo        32.1  1.9
looic      2104.6 53.5
------
Monte Carlo SE of elpd_loo is 0.1.

Pareto k diagnostic values:
                         Count Pct.    Min. n_eff
(-Inf, 0.5]   (good)     879   99.9%   3210      
 (0.5, 0.7]   (ok)         1    0.1%   757       
   (0.7, 1]   (bad)        0    0.0%   <NA>      
   (1, Inf)   (very bad)   0    0.0%   <NA>      

All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'mod_apen02':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo  -1053.3 26.8
p_loo        32.5  2.0
looic      2106.5 53.6
------
Monte Carlo SE of elpd_loo is NA.

Pareto k diagnostic values:
                         Count Pct.    Min. n_eff
(-Inf, 0.5]   (good)     878   99.8%   4808      
 (0.5, 0.7]   (ok)         1    0.1%   1443      
   (0.7, 1]   (bad)        1    0.1%   224       
   (1, Inf)   (very bad)   0    0.0%   <NA>      
See help('pareto-k-diagnostic') for details.

Output of model 'mod_apen03':

Computed from 10000 by 880 log-likelihood matrix

         Estimate   SE
elpd_loo  -1053.7 26.8
p_loo        33.5  1.9
looic      2107.5 53.5
------
Monte Carlo SE of elpd_loo is 0.1.

Pareto k diagnostic values:
                         Count Pct.    Min. n_eff
(-Inf, 0.5]   (good)     879   99.9%   2730      
 (0.5, 0.7]   (ok)         1    0.1%   732       
   (0.7, 1]   (bad)        0    0.0%   <NA>      
   (1, Inf)   (very bad)   0    0.0%   <NA>      

All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.

Model comparisons:
           elpd_diff se_diff
mod_apen01  0.0       0.0   
mod_apen00 -0.8       2.0   
mod_apen02 -1.0       2.0   
mod_apen03 -1.4       2.0   

Model Coefficients: substantial uncertainty around the stimulation effects (90% intervals)

as.data.frame(loos_apen$diffs)
            elpd_diff  se_diff  elpd_loo se_elpd_loo    p_loo se_p_loo    looic se_looic
mod_apen01  0.0000000 0.000000 -1052.285    26.72867 32.12331 1.859763 2104.570 53.45735
mod_apen00 -0.7681244 1.986805 -1053.053    26.76322 31.22394 1.841387 2106.106 53.52645
mod_apen02 -0.9709216 1.967294 -1053.256    26.82159 32.47110 1.988178 2106.512 53.64319
mod_apen03 -1.4497045 2.012275 -1053.735    26.75017 33.49944 1.923370 2107.469 53.50033

loos_apen$diffs
           elpd_diff se_diff
mod_apen01       0.0     0.0
mod_apen00      -0.8     2.0
mod_apen02      -1.0     2.0
mod_apen03      -1.4     2.0

mod_selection_res <- as.data.frame(loos_apen$diffs) %>% select(c(elpd_diff, se_diff))
models_elpd_diff <- c(
  "probe + block",
  "probe",
  "probe + block + condition",
  "probe + block + condition + visit"
  )
mod_selection_res <- data.frame(models_elpd_diff, mod_selection_res$elpd_diff, mod_selection_res$se_diff)
names(mod_selection_res) <- c("Model", "elpd_diff", "se_diff")

# compute alignment before comma() coerces the numeric columns to character
alignment <- map_chr(mod_selection_res, ~ifelse(is.numeric(.x), "r", "l"))

mod_selection_res <- mod_selection_res %>% 
  mutate(across(where(is.numeric), ~comma(., accuracy = 0.01))) %>% 
  rename_all(~gsub("\\.", " ", .)) 
LOO-CV: AE
Model                              elpd_diff  se_diff
probe + block                           0.00     0.00
probe                                  -0.77     1.99
probe + block + condition              -0.97     1.97
probe + block + condition + visit      -1.45     2.01

CACHE> loading mod_apen03 from cache/vars/tms_analyses_mod_apen03.RData

Fig. 7. AE model (mod_apen03): coefficients for all parameters of interest.

Results: AE
Hypothesis          Estimate  Est Error  CI Lower  CI Upper  Evid Ratio  Post Prob
Block < 0              -0.02       0.01     -0.05      0.00       23.81       0.96
Probe number < 0        0.02       0.01      0.00      0.04        0.01       0.01
Active rhTMS < 0       -0.27       0.12     -0.50     -0.04       83.03       0.99
Sham rhTMS < 0         -0.22       0.12     -0.45      0.01       32.78       0.97
Active arrhTMS < 0     -0.23       0.12     -0.47     -0.01       43.25       0.98
Sham arrhTMS < 0       -0.12       0.12     -0.35      0.11        5.97       0.86

Non-Parametric ANOVA

Shapiro Test

### Non-parametric ANOVA (Kruskal-Wallis rank sum test)

shapiro.test(tms_data.nback$probe.response)
shapiro.test(tms_data.nback$zlog.apen)
shapiro.test(tms_data.nback$bv)

    Shapiro-Wilk normality test

data:  tms_data.nback$probe.response
W = 0.87298, p-value < 2.2e-16


    Shapiro-Wilk normality test

data:  tms_data.nback$zlog.apen
W = 0.9406, p-value < 2.2e-16


    Shapiro-Wilk normality test

data:  tms_data.nback$bv
W = 0.5741, p-value < 2.2e-16

All three distributions are non-normal (p < .05), so we proceed with non-parametric tests.
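The decision rule above is simply p < .05 ⇒ reject normality. A self-contained sketch of the same check, with simulated data standing in for the cached tms_data.nback columns:

```r
set.seed(1)
skewed <- rexp(200)   # stand-in for a heavily skewed measure such as bv
gauss  <- rnorm(200)  # stand-in for an approximately normal measure

shapiro.test(skewed)$p.value < 0.05  # reject normality for the skewed sample
shapiro.test(gauss)$p.value < 0.05   # typically not rejected for Gaussian data
```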

MW: Kruskal-Wallis Test & Post-Hoc Multiple Comparisons

kruskal.test(probe.response ~ condition, data = tms_data.nback) # groups are different

    Kruskal-Wallis rank sum test

data:  probe.response by condition
Kruskal-Wallis chi-squared = 15.76, df = 4, p-value = 0.003359

MW: Wilcoxon test to zoom in on the significant contrasts

pairwise_wilcox_results %>% 
  kbl(caption="Pairwise Wilcoxon test: MW", align=alignment) %>% 
  kable_classic(full_width=TRUE, html_font="Times") %>% 
  row_spec(c(3, 4, 6, 7), bold = TRUE) %>% 
  row_spec(1, color = "red")
Pairwise Wilcoxon test: MW
Comparison                       W-statistic  p-value  Effect size, r
baseline vs. active rhTMS              18312     0.41            0.04
baseline vs. active arrhTMS            20483     0.23            0.06
baseline vs. sham rhTMS                21615     0.02            0.11
baseline vs. sham arrhTMS              22196     0.00            0.14
active rhTMS vs. active arrhTMS        14191     0.07            0.10
active rhTMS vs. sham rhTMS            14899     0.01            0.15
active rhTMS vs. sham arrhTMS          15308     0.00            0.18
active arrhTMS vs. sham rhTMS          13626     0.29            0.06
active arrhTMS vs. sham arrhTMS        13944     0.14            0.08
sham rhTMS vs. sham arrhTMS            13088     0.71            0.02
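The ten p-values above appear to be reported without a multiplicity correction. If one is wanted, base R's pairwise.wilcox.test runs all pairwise rank-sum tests with a Holm (or other) adjustment in a single call; a sketch with simulated data standing in for tms_data.nback:

```r
set.seed(2)
dat <- data.frame(
  condition      = rep(c("baseline", "active_rhTMS", "sham_arrhTMS"), each = 60),
  probe.response = c(rnorm(60, 0), rnorm(60, 0.1), rnorm(60, 0.7))
)

# All pairwise unpaired Wilcoxon rank-sum tests, Holm-adjusted p-values
pairwise.wilcox.test(dat$probe.response, dat$condition,
                     p.adjust.method = "holm", paired = FALSE)
```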

AE: Kruskal-Wallis Test & Post-Hoc Multiple Comparisons

kruskal.test(zlog.apen ~ condition, data = tms_data.nback) # groups are different

    Kruskal-Wallis rank sum test

data:  zlog.apen by condition
Kruskal-Wallis chi-squared = 9.6253, df = 4, p-value = 0.04724
pairwise_wilcox_results %>% 
  kbl(caption="Pairwise Wilcoxon test: AE", align=alignment) %>% 
  kable_classic(full_width=TRUE, html_font="Times") %>% 
  row_spec(c(1, 2), bold = TRUE) %>% 
  row_spec(1, color = "red")
Pairwise Wilcoxon test: AE
Comparison                       W-statistic  p-value  Effect size, r
baseline vs. active rhTMS            22138.5     0.01            0.13
baseline vs. active arrhTMS          21689.0     0.03            0.11
baseline vs. sham rhTMS              21311.5     0.06            0.09
baseline vs. sham arrhTMS            19889.5     0.54            0.03
active rhTMS vs. active arrhTMS      12561.5     0.77            0.02
active rhTMS vs. sham rhTMS          12431.0     0.66            0.02
active rhTMS vs. sham arrhTMS        11337.5     0.08            0.10
active arrhTMS vs. sham rhTMS        12673.5     0.88            0.01
active arrhTMS vs. sham arrhTMS      11567.5     0.14            0.08
sham rhTMS vs. sham arrhTMS          11767.5     0.21            0.07
pairwise_wilcox_results <- data.frame(matrix(ncol = 3, nrow = 10))
colnames(pairwise_wilcox_results) <- c("Comparison", "W-statistic", "p-value")
pairwise_wilcox_results$Comparison <- tms_comparison_list

i <- 1
for (comparison in tms_comparison_list) {
  r <- wilcox.test(filter(tms_data.nback, condition == comparison[1])$zbv,
                   filter(tms_data.nback, condition == comparison[2])$zbv,
                   paired = FALSE)
  # c() instead of rbind(bind_cols(...)): bind_cols() on unnamed values
  # is what triggers the "New names" messages
  pairwise_wilcox_results[i, 2:3] <- c(r$statistic, r$p.value)
  i <- i + 1
}
pairwise_wilcox_results <- pairwise_wilcox_results %>% 
  mutate(across("p-value", ~comma(., accuracy=0.01)), Comparison = comparisons) %>% 
  rename_all(~gsub("\\.", " ", .)) 
pairwise_wilcox_results %>% 
  kbl(caption="Pairwise Wilcoxon test: BV", align=alignment) %>% 
  kable_classic(full_width=TRUE, html_font="Times") %>% 
  row_spec(c(2), bold = TRUE) %>% 
  row_spec(1, color = "red")
Pairwise Wilcoxon test: BV
Comparison                       W-statistic  p-value
baseline vs. active rhTMS              16467     0.02
baseline vs. active arrhTMS            16813     0.04
baseline vs. sham rhTMS                18209     0.38
baseline vs. sham arrhTMS              17964     0.28
active rhTMS vs. active arrhTMS        13154     0.67
active rhTMS vs. sham rhTMS            13859     0.20
active rhTMS vs. sham arrhTMS          13786     0.23
active arrhTMS vs. sham rhTMS          13625     0.32
active arrhTMS vs. sham arrhTMS        13615     0.32
sham rhTMS vs. sham arrhTMS            12678     0.88

AE: Wilcoxon test to zoom in on the significant contrast

wilcox.test(filter(tms_data.nback_z, (`Condition` == "baseline" & `Measure` == "zlog.apen"))$`Z-score`,
            filter(tms_data.nback_z, (`Condition` == "active_rhTMS" & `Measure` == "zlog.apen"))$`Z-score`,
            paired = FALSE)

rstatix::wilcox_effsize(tms_data.nback, zlog.apen ~ condition, paired = FALSE)

    Wilcoxon rank sum test with continuity correction

data:  filter(tms_data.nback_z, (Condition == "baseline" & Measure == "zlog.apen"))$`Z-score` and filter(tms_data.nback_z, (Condition == "active_rhTMS" & Measure == "zlog.apen"))$`Z-score`
W = 22138, p-value = 0.009497
alternative hypothesis: true location shift is not equal to 0

# A tibble: 10 x 7
   .y.       group1         group2         effsize    n1    n2 magnitude
 * <chr>     <chr>          <chr>            <dbl> <int> <int> <ord>    
 1 zlog.apen baseline       active_rhTMS   0.130     240   160 small    
 2 zlog.apen baseline       active_arrhTMS 0.110     240   160 small    
 3 zlog.apen baseline       sham_rhTMS     0.0932    240   160 small    
 4 zlog.apen baseline       sham_arrhTMS   0.0304    240   160 small    
 5 zlog.apen active_rhTMS   active_arrhTMS 0.0161    160   160 small    
 6 zlog.apen active_rhTMS   sham_rhTMS     0.0249    160   160 small    
 7 zlog.apen active_rhTMS   sham_arrhTMS   0.0988    160   160 small    
 8 zlog.apen active_arrhTMS sham_rhTMS     0.00855   160   160 small    
 9 zlog.apen active_arrhTMS sham_arrhTMS   0.0833    160   160 small    
10 zlog.apen sham_rhTMS     sham_arrhTMS   0.0697    160   160 small    
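rstatix::wilcox_effsize reports r = |Z| / sqrt(n1 + n2). An equivalent base-R approximation recovers |Z| from the two-sided p-value of wilcox.test (wilcox_r is a hypothetical helper, valid for p < 1):

```r
# Effect size r for an unpaired rank-sum test via the normal approximation
wilcox_r <- function(x, y) {
  p <- wilcox.test(x, y, paired = FALSE)$p.value
  z <- qnorm(p / 2, lower.tail = FALSE)  # |Z| implied by the two-sided p
  z / sqrt(length(x) + length(y))
}

set.seed(3)
wilcox_r(rnorm(240), rnorm(160, 0.3))  # a small r, comparable to the table above
```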

BV: Kruskal-Wallis Test & Post-Hoc Multiple Comparisons

kruskal.test(zbv ~ condition, data = tms_data.nback) # groups are NOT different
tms_data.nback$bv_ranks <- rank(tms_data.nback$zbv)
by(tms_data.nback$bv_ranks, tms_data.nback$condition, mean) # where the difference lies
pgirmess::kruskalmc(zbv ~ condition, data = tms_data.nback, cont = "two-tailed")
pgirmess::kruskalmc(zbv ~ condition, data = tms_data.nback)

    Kruskal-Wallis rank sum test

data:  zbv by condition
Kruskal-Wallis chi-squared = 7.4528, df = 4, p-value = 0.1138

tms_data.nback$condition: baseline
[1] 409.8875
------------------------------------------------------------ 
tms_data.nback$condition: active_rhTMS
[1] 472.575
------------------------------------------------------------ 
tms_data.nback$condition: active_arrhTMS
[1] 463.4563
------------------------------------------------------------ 
tms_data.nback$condition: sham_rhTMS
[1] 434.1562
------------------------------------------------------------ 
tms_data.nback$condition: sham_arrhTMS
[1] 437.7312
Multiple comparison test after Kruskal-Wallis, treatments vs control (two-tailed) 
p.value: 0.05 
Comparisons
                         obs.dif critical.dif difference
baseline-active_rhTMS   62.68750     64.79542      FALSE
baseline-active_arrhTMS 53.56875     64.79542      FALSE
baseline-sham_rhTMS     24.26875     64.79542      FALSE
baseline-sham_arrhTMS   27.84375     64.79542      FALSE
Multiple comparison test after Kruskal-Wallis 
p.value: 0.05 
Comparisons
                             obs.dif critical.dif difference
baseline-active_rhTMS       62.68750     72.82000      FALSE
baseline-active_arrhTMS     53.56875     72.82000      FALSE
baseline-sham_rhTMS         24.26875     72.82000      FALSE
baseline-sham_arrhTMS       27.84375     72.82000      FALSE
active_rhTMS-active_arrhTMS  9.11875     79.77032      FALSE
active_rhTMS-sham_rhTMS     38.41875     79.77032      FALSE
active_rhTMS-sham_arrhTMS   34.84375     79.77032      FALSE
active_arrhTMS-sham_rhTMS   29.30000     79.77032      FALSE
active_arrhTMS-sham_arrhTMS 25.72500     79.77032      FALSE
sham_rhTMS-sham_arrhTMS      3.57500     79.77032      FALSE

Appendices

Histograms

AE and BV distributions

Subject-wise AE and BV distributions (better to zoom in on the RStudio output)

tms_data.nback_z %>%
  ggplot(aes(x = `On-task Score`, y = `Z-score`, group = `Measure`, color = `Measure`)) + 
  geom_pointrange(stat = "summary", fun.data = mean_se, fun.args = list(mult = 1), position = position_dodge(0.05)) +
  geom_line(stat = "summary", fun.data = mean_se, fun.args = list(mult = 2)) +
  facet_wrap(~`Condition`) +
  scale_color_manual(labels = c("AE", "BV"), values = c("blue", "red")) +
  theme(text = element_text(size = 15)) 

tms_data.nback_z %>%
  ggplot(aes(x = `Focus`, y = `Z-score`, group = `Measure`, color = `Measure`)) + 
  geom_pointrange(stat = "summary", fun.data = mean_se, fun.args = list(mult = 1), position = position_dodge(0.05)) +
  geom_line(stat = "summary", fun.data = mean_se, fun.args = list(mult = 2)) +
  scale_color_manual(labels = c("AE", "BV"), values = c("blue", "red")) +
  facet_wrap(~`Condition`) +
  theme(text = element_text(size = 18))